A Decidable Multi-agent Logic for Reasoning About Actions, Instruments, and Norms
We formally introduce a novel, yet ubiquitous, category of norms: norms of instrumentality. Norms of this category describe which actions are obligatory, or prohibited, as instruments for certain purposes. We propose the Logic of Agency and Norms (LAN), which enables reasoning about actions, instrumentality, and normative principles in a multi-agent setting. Leveraging LAN, we formalize norms of instrumentality and compare them to two prevalent norm categories: norms to be and norms to do. Lastly, we pose principles relating the three categories and evaluate their validity vis-à-vis notions of deliberative acting. On a technical note, the logic is shown to be decidable via the finite model property.
Achieving while maintaining: A logic of knowing how with intermediate constraints
In this paper, we propose a ternary knowing how operator to express that the agent knows how to achieve a given goal while maintaining a given condition in-between. It generalizes the logic of goal-directed knowing how proposed in Yanjing Wang's 2015 paper 'A logic of knowing how'. We give a sound and complete axiomatization of this logic.
Comment: to appear in Proceedings of ICLA 201
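For orientation, a ternary knowing-how operator of this kind is usually given a plan-based semantics. The following is an illustrative sketch only, not the paper's actual definition; the operator name Kh, the formulas ψ, χ, φ, and the plan variable σ are assumptions:

```latex
% Illustrative semantics for a ternary knowing-how operator
% Kh(psi, chi, phi): "knowing how to achieve phi, given psi,
% while maintaining chi in-between".
\mathcal{M} \models \mathit{Kh}(\psi,\chi,\varphi) \iff
\exists \sigma\; \forall s \text{ with } \mathcal{M}, s \models \psi :\;
  \sigma \text{ is executable at } s,\;
  \text{every run of } \sigma \text{ from } s \text{ ends in a } \varphi\text{-state},\;
  \text{and every intermediate state of every run satisfies } \chi .
```

The binary, goal-directed operator of the 2015 logic is recovered when the maintenance condition χ is trivially true.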
Policies to Regulate Distributed Data Exchange
This research is partially sponsored by the EPSRC grant EP/P011829/1, funded under the UK Engineering and Physical Sciences Research Council Human Dimensions of Cyber Security call (2016).
Postprint
The open future, bivalence and assertion
It is highly intuitive that the future is open and the past is closed—whereas it is unsettled whether there will be a fourth world war, it is settled that there was a first. Recently, it has become increasingly popular to claim that the intuitive openness of the future implies that contingent statements about the future, such as ‘there will be a sea battle tomorrow,’ are non-bivalent (neither true nor false). In this paper, we argue that the non-bivalence of future contingents is at odds with our pre-theoretic intuitions about the openness of the future. These are revealed by our pragmatic judgments concerning the correctness and incorrectness of assertions of future contingents. We argue that the pragmatic data, together with a plausible account of assertion, show that in many cases we take future contingents to be true (or to be false), though we take the future to be open in relevant respects. It follows that appeals to intuition to support the non-bivalence of future contingents are untenable. Intuition favours bivalence.
Towards Logical Specification of Statistical Machine Learning
We introduce a logical approach to formalizing statistical properties of machine learning. Specifically, we propose a formal model for statistical classification based on a Kripke model, and formalize various notions of classification performance, robustness, and fairness of classifiers by using epistemic logic. Then we show some relationships among properties of classifiers, and between classification performance and robustness, which suggest robustness-related properties that have not been formalized in the literature as far as we know. To formalize fairness properties, we define a notion of counterfactual knowledge and show techniques to formalize conditional indistinguishability by using counterfactual epistemic operators. As far as we know, this is the first work that uses logical formulas to express statistical properties of machine learning, and that provides epistemic (resp. counterfactually epistemic) views on robustness (resp. fairness) of classifiers.
Comment: SEFM'19 conference paper (full version with errors corrected).
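The abstract does not spell out the counterfactual operators themselves. As a nearby point of reference only, the well-known notion of counterfactual fairness (in the style of Kusner et al., named here as a related technique rather than this paper's definition) can be stated as:

```latex
% A predictor \hat{Y} is counterfactually fair w.r.t. a protected
% attribute A if, for an individual with features X = x and A = a,
% counterfactually setting A to any other value a' leaves the
% prediction distribution unchanged:
P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big)
  = P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big)
```

An epistemic reading of such a condition is that an observer cannot come to know the protected attribute from the classifier's behavior alone.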
What Is an Act of Engagement? Between the Social, Collegial and Institutional Protocols
Engagement is not synonymous with commitment, even though both words are used in translations between English, French, and German. However, engagement is also not some supplementary phenomenon or a technical term that the phrase social acts already includes in itself, or that the concepts of ‘commitment’ or ‘joint commitment’ somehow necessarily imply. In this article I would like to describe a special kind of social act and determine the function such acts have in relations between various agents. Most importantly, I would like to define their significance in the transformation of a group into an institution or higher-order entity. My premise is that there are acts whose aim is to engage all others, since they refer to all of us together, and in so doing reduce negative (social) “acts” as well as various asocial behaviors within a group or institution. In this sense, engaged acts could alternatively also be regarded as a kind of institutional act, since they introduce certain adjustments to the institution, changing or modifying its rules and increasing its consistency and efficiency.
First book series in Philosophy of the Social Sciences that specifically focuses on Philosophy of Sociality and Social Ontology.
Studies in the Philosophy of Sociality
Volume 1
Statistical Epistemic Logic
We introduce a modal logic for describing statistical knowledge, which we call statistical epistemic logic. We propose a Kripke model dealing with probability distributions and stochastic assignments, and show a stochastic semantics for the logic. To our knowledge, this is the first semantics for modal logic that can express the statistical knowledge dependent on non-deterministic inputs and the statistical significance of observed results. By using statistical epistemic logic, we express a notion of statistical secrecy with a confidence level. We also show that this logic is useful to formalize statistical hypothesis testing and differential privacy in a simple and abstract manner.
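The abstract does not include the formalization itself, but the notion of differential privacy it targets is standard. As a concrete point of reference, here is a minimal sketch of the ε-differentially private Laplace mechanism (the textbook construction, not code from the paper; function and parameter names are illustrative):

```python
import math
import random

def laplace_scale(sensitivity: float, epsilon: float) -> float:
    """Noise scale b = Δf/ε for the Laplace mechanism."""
    if epsilon <= 0:
        raise ValueError("epsilon must be positive")
    return sensitivity / epsilon

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) via inverse-CDF from a uniform draw."""
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Release a count with ε-DP; counting queries have sensitivity 1."""
    return true_count + laplace_noise(laplace_scale(1.0, epsilon), rng)
```

Smaller ε means a larger noise scale and hence stronger privacy; the logic described in the abstract would express the resulting ε-indistinguishability of neighboring inputs as an epistemic property of an observer of the released value.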
How AI Systems Challenge the Conditions of Moral Agency?
The article explores the effects increasing automation has on our conceptions of human agency. We conceptualize the central features of human agency as ableness, intentionality, and rationality, and define responsibility as a central feature of moral agency. We discuss suggestions in favor of holding AI systems to be moral agents on account of their functions, but join those who refute this view. We consider the possibility of assigning moral agency to automated AI systems in settings of machine-human cooperation, but come to the conclusion that AI systems are not genuine participants in joint action and cannot be held morally responsible. Philosophical issues notwithstanding, the functions of AI systems change human agency as they affect our goal setting and pursuit by influencing our conceptions of the attainable. Recommendation algorithms on news sites, social media platforms, and in search engines modify our possibilities to receive accurate and comprehensive information, hence influencing our decision making. Sophisticated AI systems replace the human workforce even in such demanding fields as medical surgery, language translation, visual arts, and composing music. Being second to a machine in an increasing number of fields of expertise will affect how human beings regard their own abilities. We need a deeper understanding of how technological progress takes place and how it is intertwined with economic and political realities. Moral responsibility remains a human characteristic. It is our duty to develop AI to serve morally good ends and purposes. Protecting and strengthening the conditions of human agency in any AI environment is part of this task.
Peer reviewed
The Metaphysics of the Thin Red Line
There seems to be a minimal core that every theory wishing to accommodate the intuition that the future is open must contain: a denial of physical determinism (i.e. the thesis that what future states the universe will be in is implied by what states it has been in), and a denial of strong fatalism (i.e. the thesis that, at every time, what will subsequently be the case is metaphysically necessary).[1] Those two requirements are often associated with the idea of an objective temporal flow and the non-reality of the future. However, at least certain ways to frame the “openness” intuition do not rely on any of these. Branching Time Theory (BTT) is one such: it is compatible with the denial that time flow is objective, and it is couched in a language with a (prima facie) commitment to an eternalist ontology. BTT, though, urges us to resist certain intuitions about the determinacy of future claims, which arguably do not lead either to physical determinism or to fatalism. Against BTT, supporters of the Thin Red Line Theory (TRL) argue that their position avoids determinism and fatalism, while also representing the fact that there is a future which is “special” because it is the one that will be the case. But starting with Belnap and Green 1994, some have objected to the tenability of TRL, mainly on metaphysical grounds. In particular, they argue that “positing a thin red line amounts to giving up objective indeterminism,”[2] and that it “has unacceptable consequences, ranging from a mistreatment of actuality to an inability to talk coherently about what would have happened had what is going to happen not taken place.”[3] In this paper, we wish to reframe the …
[1] Hence, strong fatalism implies physical determinism, while the latter does not imply the former, thus being compatible with the world having been otherwise, assuming that the initial condition of the world could have been otherwise. Also, strong fatalism is intended as opposed to weak fatalism, according to which whatever I do now won't affect what will be the case. Weak fatalism, instead, neither implies, nor is implied by, physical determinism.
Causal circuit explanations of behavior: Are necessity and sufficiency necessary and sufficient?
In the current advent of technological innovation allowing for precise neural manipulations and copious data collection, it is hardly questioned that the explanation of behavioral processes is to be chiefly found in neural circuits. Such a belief, rooted in the exhausted dualism of cause and effect, is enacted by a methodology that promotes “necessity and sufficiency” claims as the gold standard in neuroscience, thus instructing young students on what shall count as explanation. Here we wish to deconstruct and explicate the difference between what is done, what is said, and what is meant by such causal circuit explanations of behavior. Well known to most philosophers, yet ignored or at least hardly ever made explicit by neuroscientists, the original grand claim of “understanding the brain” is imperceptibly substituted by the methodologically sophisticated task of empirically establishing counterfactual dependencies. But for the 21st-century neuroscientist, after so much pride, this is really an excess of humility. I argue that to upgrade intervention to explanation is prone to logical fallacies and interpretational leaps, and carries a weak explanatory force, thus settling and maintaining low standards for intelligibility in neuroscience. To claim that behavior is explained by a “necessary and sufficient” neural circuit is, at best, misleading. In that, my critique (rather than criticism) is indeed mainly negative. Positively, I briefly suggest some available alternatives for conceptual progress, such as adopting circular causality (rather than linear causality in the flavor of top-down reductionism), searching for principles of behavior (rather than taking an arbitrary definition of behavior and rushing to dissect its “underlying” neural mechanisms), and embracing process philosophy (rather than substance-mechanistic ontologies).
Overall, if the goal of neuroscience is to understand the relation between brain and behavior then, in addition to excruciating neural studies (one pillar), we will need a strong theory of behavior (the other pillar) and a solid foundation to establish their relation (the bridge).